Frontiers in Artificial Intelligence
Frontiers Media SA
All preprints, ranked by how well they match Frontiers in Artificial Intelligence's content profile, based on 11 papers previously published here. The average preprint has a 0.08% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Eze, C. D.; Hansen, R.; Abate, M.; Smith, G.; Al-Mamun, M. A.
Between 2010 and 2021, deaths co-involving fentanyl and stimulants increased from 0.6% to 32.3% of all overdose deaths in the U.S. The Centers for Disease Control and Prevention (CDC) monitors overdose deaths, but reports are delayed by about 4-6 months. Advanced methods are therefore needed for optimized trend monitoring and preparing the healthcare system. We developed and compared traditional and machine learning (ML)-based time series prediction models for forecasting opioid- and stimulant-involved death rates. Forensic research data (2015 to 2023) built from the West Virginia (WV) Office of the Chief Medical Examiner data were used for this study. Decedents with any opioid- or stimulant-involved death were identified and placed into three cohorts: opioid-only, stimulant-only, and opioid and stimulant co-involved deaths. The monthly death rate per 100,000 was calculated for each cohort using total cases per month and West Virginia population data from the Census Bureau. Autoregressive Integrated Moving Average (ARIMA), Random Forest (RF), and Extreme Gradient Boosting (XGBoost) variant models (differenced, non-differenced, and blended) were trained on 80% of each cohort's time-ordered data. An iterative forecast over the 20% test data was conducted. Model performance on the test predictions was evaluated using root mean square error (RMSE), R2, mean absolute error (MAE), and mean absolute percentage error (MAPE). Counts and percentages of cases per year were obtained for each cohort. Death rates and model predictions were represented as time series, and model performance for each cohort was compared using these metrics. 10,812 cases were identified from 2015 to 2023, with 4,295 involving opioid-only, 1,392 involving stimulant-only, and 4,175 co-involving an opioid and a stimulant.
Stimulant-only and opioid and stimulant co-involved death rates showed an upward trend, with opioid and stimulant co-involved deaths peaking in 2021. Although the opioid-only death rate trended downward over time, it peaked in 2020. The non-differenced XGBoost model performed best for opioid-only (R2 = 0.92, RMSE = 0.12, MAE = 0.10, MAPE = 6.59%) and stimulant-only (R2 = 0.91, RMSE = 0.07, MAE = 0.06, MAPE = 7.35%) death rate prediction. The blended XGBoost model had the best performance for opioid and stimulant co-involved death rate prediction (R2 = 0.78, RMSE = 0.31, MAE = 0.27, MAPE = 8.87%). Differenced XGBoost models outperformed other models for short-term forecasting, while the non-differenced variants performed better for long-term predictions. Machine learning models, especially the XGBoost variants, outperformed other models for predicting opioid-only, stimulant-only, and opioid and stimulant co-involved death rates. The differenced models can be used for early death-rate signal detection, while the non-differenced XGBoost models can aid long-term forecasts for overdose death monitoring, planning, and allocation of resources in health systems.
Author summary: The United States has faced the problem of opioid abuse and overdose death for several decades. Currently, there is a rise in drug overdose deaths co-involving an opioid and a stimulant. Although the CDC monitors and produces a provisional overdose death count, this report is often delayed by 4-6 months. There is a need for a high-accuracy predictive tool that can yield reliable forecasts of these overdose deaths to guide policy decisions and avoid the delay. Here we developed and compared machine learning (Extreme Gradient Boosting (XGBoost) and Random Forest) and traditional statistical (ARIMA) forecasting models for predicting overdose death rates involving an opioid, a stimulant, or both. We found that the XGBoost models outperformed ARIMA and Random Forest. Our study provides a tool that can be used to predict future overdose deaths and inform health systems and communities so they can better respond to overdose deaths and develop prevention policies targeting drugs, especially opioids and stimulants.
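The differenced vs. non-differenced model variants this abstract contrasts can be illustrated with a minimal sketch. This is not the authors' pipeline: the monthly rates below are synthetic, and plain first-order differencing stands in for whatever feature engineering the study actually used.

```python
# Sketch: first-order differencing for time-series forecasting, as contrasted
# in the abstract (differenced vs. non-differenced model variants).
# Hypothetical monthly death rates per 100,000 (synthetic values).
rates = [1.0, 1.2, 1.5, 1.4, 1.8, 2.1]

def difference(series):
    """y'_t = y_t - y_(t-1): a differenced model learns month-over-month change."""
    return [b - a for a, b in zip(series, series[1:])]

def undifference(last_observed, diffs):
    """Rebuild level forecasts from predicted differences."""
    out, level = [], last_observed
    for d in diffs:
        level += d
        out.append(level)
    return out

diffs = difference(rates)  # approx. [0.2, 0.3, -0.1, 0.4, 0.3]
# A differenced model would be trained on `diffs`; suppose it predicts:
predicted_diffs = [0.2, 0.1]
# Integrate back to the original scale for reporting:
forecast = undifference(rates[-1], predicted_diffs)  # approx. [2.3, 2.4]
```

A non-differenced model would instead be trained on `rates` directly, which is why the two variants can behave differently at short vs. long horizons, as the abstract reports.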
Biswas, M. H. A.; Islam, M. A. I.
The present world is facing a devastating reality as drug abuse prevails in every corner of society. The progress of a country is obstructed by excessive drug use among the young generation. Like other countries, Bangladesh is facing this dreadful situation. The use of multiple drug substances leads an individual to a sorrowful destination and disrupts the natural behavior of the human mind. An addicted individual may regain a normal life with proper monitoring and treatment. The objective of this study is to analyze a mathematical model of the dynamics of drug abuse from the perspective of Bangladesh and to reduce its harmful consequences with effective control policies based on optimal control theory. The model has been solved analytically by introducing a specific optimal goal. Numerical simulations have also been performed to review the behavior of the analytical findings, and the analytical results have been verified against the numerical simulations. The analysis shows that it is possible to control drug addiction if there is less interaction between the general population and addicted individuals. Family-based care, proper medical treatment, awareness, and educational programs can be the most effective ways to reduce the adverse effects of drug addiction in the shortest possible time.
Ford, A.; Lan, J.; Ng, K.
Objectives: First, we aimed to develop a system capable of detecting multiple cardiac abnormalities simultaneously from 12-lead ECG recordings. Second, we tried to improve detection by analyzing the relationship between imbalanced datasets and optimal classification thresholds. Methods: A novel fusion of a Convolutional Positional Encoder and a Transformer Encoder was used to solve the multi-label classification problem. We used an appropriate evaluation metric, the area under the precision-recall curve (AUPRC), which enabled us to analyze the precision-recall trade-off and find optimal thresholds. Results: Having outperformed other popular deep networks, the model achieved the highest AUPRC of 0.96 and F1-score of 0.90 on a 42,511-sample dataset. We also found a negative correlation coefficient of -0.68 between optimal thresholds and the proportion of positive samples. Significance & Conclusion: This study compared the performance of different deep learning architectures on a medical problem and showed the potential of advanced techniques for capturing spatial and temporal features alongside attention mechanisms. It also showed how to reduce the impact of imbalanced datasets and find optimal classification thresholds.
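The threshold-tuning idea this abstract describes, scanning the precision-recall trade-off per label to pick an operating point, can be sketched as follows. The labels and scores are made up for illustration; they are not the paper's ECG data, and the paper's exact selection criterion is not stated here, so F1 maximization is used as a plausible stand-in.

```python
# Sketch: choosing a classification threshold from the precision-recall
# trade-off. Synthetic scores/labels; F1 maximization is an assumed criterion.
def precision_recall(labels, scores, thr):
    """Precision and recall when predicting positive for score >= thr."""
    tp = sum(1 for y, s in zip(labels, scores) if s >= thr and y == 1)
    fp = sum(1 for y, s in zip(labels, scores) if s >= thr and y == 0)
    fn = sum(1 for y, s in zip(labels, scores) if s < thr and y == 1)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def best_threshold(labels, scores):
    """Scan candidate thresholds and keep the one maximizing F1."""
    best_thr, best_f1 = 0.5, -1.0
    for thr in sorted(set(scores)):
        p, r = precision_recall(labels, scores, thr)
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        if f1 > best_f1:
            best_thr, best_f1 = thr, f1
    return best_thr, best_f1

labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.65, 0.2]
thr, f1 = best_threshold(labels, scores)
```

In a multi-label setting this scan would run once per abnormality class, which is where the reported correlation between optimal thresholds and class prevalence comes from.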
Habib, P.; Alsamman, A.; Hassanein, S.; Shereif, G.; Hamwieh, A.
Breast cancer, with an estimated 268,600 new cases in 2019, is one of the most common cancers and one of the world's leading causes of death for women. Classification and data mining are efficient ways to organize such information, particularly in the medical field, where prediction techniques are commonly used for early detection and effective treatment in diagnosis and research. This paper tests models for the mammogram-based analysis of breast cancer data using 23 of the more widely used machine learning algorithms, such as Decision Tree, Random Forest, K-Nearest Neighbors, and Support Vector Machine. Results are obtained from repeated 10-fold cross-validation with random splits. Accuracy is assessed with regression metrics such as Mean Absolute Error, Mean Squared Error, and R2 score; clustering metrics such as Adjusted Rand Index, Homogeneity, and V-measure; and F-measure, AUC, and cross-validation. Proper identification of patients with breast cancer would create care opportunities; for example, supervision and the implementation of intervention plans could improve the quality of long-term care. Experimental results reveal that the maximum precision of 100% with the lowest error rate is obtained with the AdaBoost classifier.
Beluzo, C. E.; Alves, L. C.; Bresan, R. C.; Arruda, N. M.; Sovat, R. B.; Carvalho, T.
Infant mortality reflects a complex combination of biological, socioeconomic, and health care factors that require various data sources for a thorough analysis. Consequently, specialized tools and techniques for dealing with large volumes of data are extremely helpful. Machine learning has been applied to problems in many domains and presents great potential for the proposed problem, which would be an innovation in the Brazilian context. In this paper, an innovative method is proposed to perform neonatal death risk assessment using computer vision techniques. Using features of the mother, pregnancy care, and child at birth, from a dataset of neonatal samples drawn from Sao Paulo city public health data, the proposed method encodes the features as images and uses a custom convolutional neural network architecture for classification. Experiments show that the method is able to detect death samples with an accuracy of 90.61%.
Pourhomayoun, M.; Shakibi, M.
In the wake of the COVID-19 disease, caused by the SARS-CoV-2 virus, we designed and developed a predictive model based on Artificial Intelligence (AI) and machine learning algorithms to determine the health risk and predict the mortality risk of patients with COVID-19. In this study, we used documented data of 117,000 patients worldwide with laboratory-confirmed COVID-19. This study proposes an AI model to help hospitals and medical facilities decide who needs attention first, who has higher priority to be hospitalized, triage patients when the system is overwhelmed by overcrowding, and eliminate delays in providing the necessary care. The results demonstrate 93% overall accuracy in predicting the mortality rate. We used several machine learning algorithms, including Support Vector Machine (SVM), Artificial Neural Networks, Random Forest, Decision Tree, Logistic Regression, and K-Nearest Neighbor (KNN), to predict the mortality rate in patients with COVID-19. In this study, the most alarming symptoms and features were also identified. Finally, we used a separate dataset of COVID-19 patients to evaluate the accuracy of our developed model, and used a confusion matrix to make an in-depth analysis of our classifiers and calculate the sensitivity and specificity of our model.
Bandyopadhyay, S. K.; Dutta, S.
Hepatitis is an inflammation of the liver. There are several types, named A through G; for example, Hepatitis A is caused by the hepatitis A virus, and the other types, up to Hepatitis G, are named for their viruses similarly. Some types do not create any serious problems, but long-lasting infection can cause scarring of the liver, loss of liver function, and, in some cases, liver cancer. A voting-ensemble-based approach is proposed in this paper as a final classification phase that accepts the top two classifier models obtained from the first and second classification phases, respectively. The purpose of the proposed classifier is to enhance prediction performance so that patients with hepatitis disease are identified correctly.
Owusu-Adjei, M.; Ben Hayfron-Acquah, J.; Frimpong, T.; Abdul-Salaam, G.
Background: Predictive algorithms and their performance evaluation are extensively covered in most research studies. The best predictive models offer optimal prediction solutions in the form of prediction accuracy scores, precision, recall, etc. The prediction accuracy score from performance evaluation has been used as a determining factor for recommending an appropriate model. It is one of the most widely used metrics for identifying optimal prediction solutions, irrespective of context or the nature of the dataset, its size, and the output class distribution between minority and majority variables. The key research question, however, is the impact of using prediction accuracy, as compared to balanced accuracy, in determining model performance in healthcare and other real-world application systems. Answering this question requires an appraisal of the current state of knowledge on both prediction accuracy and balanced accuracy in real-world applications, including a search for related works that highlight appropriate machine learning methodologies and techniques. Materials and methods: A systematic review of related research works was conducted through an adopted search-strategy protocol for relevant literature, focusing on the following characteristics: the current state of knowledge with respect to ML techniques, applications, and evaluations; research works with prediction accuracy score as an evaluation metric; and research works in real-world contexts with appropriate methodologies. Specific search timelines were deliberately not defined, in order to include as many important works as possible irrespective of publication date. Of particular interest were related works on healthcare systems and other real-world applications (spam detection, fraud prediction, risk prediction, etc.). Results: Observations from the related literature indicate extensive use of machine learning techniques in real-world applications.
The predominantly used machine learning techniques were Random Forest, Support Vector Machine, Logistic Regression, K-Nearest Neighbor, Decision Trees, Gradient Boosting classifiers, and a few ensemble techniques. The use of evaluation metrics such as precision, recall, F1-score, prediction accuracy, and, in a few instances, predicted positive and predicted negative values as justification for best-model recommendation was also noted. Of interest is the use of prediction accuracy as the predominant metric for assessing model performance across all the related literature identified. Conclusions: In light of the challenges identified with the use of prediction accuracy as a performance measure for best model predictions, we propose a novel evaluation approach for predictive modeling in the healthcare-systems context, called PMEA (Proposed Model Evaluation Approach), which can be generalized to similar contexts. PMEA addresses the challenges of using prediction accuracy by relying on a balanced accuracy score derived from the two most important evaluation metrics (true positive rate and true negative rate: TPR, TNR) to estimate best model performance in context more accurately. Identifying an appropriate evaluation metric for performance assessment will ensure a true determination of the best-performing prediction model for recommendation.
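The gap between plain accuracy and balanced accuracy that motivates the PMEA proposal is easy to demonstrate. The confusion-matrix counts below are synthetic, not taken from any study in this listing; balanced accuracy here is the standard mean of TPR and TNR that the abstract names.

```python
# Sketch: why balanced accuracy (mean of TPR and TNR) can diverge from
# plain accuracy on imbalanced data. Synthetic counts for illustration.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def balanced_accuracy(tp, tn, fp, fn):
    tpr = tp / (tp + fn)   # sensitivity: recall on the positive class
    tnr = tn / (tn + fp)   # specificity: recall on the negative class
    return (tpr + tnr) / 2

# 95 negatives, 5 positives; the classifier finds only 1 of the 5 positives.
tp, tn, fp, fn = 1, 94, 1, 4
acc = accuracy(tp, tn, fp, fn)            # 0.95: looks excellent
bal = balanced_accuracy(tp, tn, fp, fn)   # ~0.59: exposes poor minority recall
```

In a healthcare setting the minority class is typically the disease-positive group, which is exactly the case where the two metrics disagree most.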
Mojrian, S.; Pinter, G.; Hassannataj Joloudari, J.; Felde, I.; Szabo-Gali, A.; Nadai, L.; Mosavi, A.
Mammography is the most common laboratory method for the detection of breast cancer, yet it is associated with high cost and many side effects. Machine learning prediction as an alternative method has shown promising results. This paper presents a method based on a multilayer fuzzy expert system for the detection of breast cancer using an extreme learning machine (ELM) classification model integrated with a radial basis function (RBF) kernel, called ELM-RBF, on the Wisconsin dataset. The performance of the proposed model is compared with a linear-SVM model. The proposed model outperforms the linear-SVM model with RMSE, R2, and MAPE equal to 0.1719, 0.9374, and 0.0539, respectively. Furthermore, both models are studied in terms of accuracy, precision, sensitivity, specificity, validation, true positive rate (TPR), and false negative rate (FNR). On these criteria, the ELM-RBF model performs better than the SVM model.
Sameer, M.; Gupta, B.
Background: Machine learning (ML) has paved the way for scientists to develop effective computer-aided diagnostic (CAD) systems. In recent years, epileptic seizure detection using electroencephalogram (EEG) data and deep learning models has gained much attention. However, in deep learning networks, the bottleneck is the large number of learnable parameters. Method: In this study, a novel approach comprising a 1D Convolutional Neural Network (CNN) model for feature extraction, followed by classical-quantum hybrid layers for classification, is proposed. The proposed technique has only 745 learnable parameters, the fewest reported to date. Result: The proposed method achieved a maximum accuracy, sensitivity, and specificity of 100% for binary classification on the Bonn EEG dataset. In addition, the noise robustness of the proposed model was also checked. To the best of the authors' knowledge, this is the first study to employ quantum machine learning (QML) to detect epileptic seizures. Conclusion: The developed hybrid system will help neurologists detect seizures in online mode.
Chang, X.; Yi, J.; Peng, G.
In cardiology, the classification of electrocardiograms (ECGs) or heartbeats serves as a vital instrument. Techniques grounded in deep learning for ECG signal examination support medical professionals in swiftly identifying heart ailments, thereby aiding in life preservation. The present investigation endeavors to convert a dataset comprising ECG record images into time-series signals, followed by the implementation of deep learning (DL) methodologies on this transformed dataset. Cutting-edge DL methodologies are introduced for categorizing ECG signals across diverse cardiac categories. This work examines and juxtaposes various DL architectures, encompassing a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a self-supervised learning framework leveraging autoencoders. Training of these models occurs on a dataset derived from ECG tracings of individuals at multiple medical facilities in Pakistan. Initially, the ECG images undergo digitization with segmentation of lead II heartbeats, after which the resulting signals are inputted into the advocated DL models for categorization. Within the array of DL models evaluated herein, the advocated CNN architecture attains the peak accuracy of > 90%. This architecture exhibits superior precision and expedited inference, facilitating instantaneous and unmediated surveillance of ECG signals acquired via electrodes (sensors) positioned on various bodily regions. Employing the digitized variant of ECG signals, as opposed to pictorial representations, for cardiac arrhythmia categorization empowers cardiologists to deploy DL models directly onto signals emanating from ECG apparatus, enabling contemporaneous and precise ECG oversight.
Aldhyani, T. H. H.; Alrasheed, M.; Alqarn, A. i. A.; Alzahrani, M. Y.; Alahmadi, A. H.
According to the WHO, more than one million individuals have been infected with COVID-19, and around 20,000 people have died of this infectious disease around the world. The COVID-19 epidemic poses a serious public health threat, as people with little or no pre-existing immunity can be especially vulnerable to the effects of the coronavirus. Developing surveillance systems for predicting the COVID-19 pandemic at an early stage could thus save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict coronavirus spread. The Long Short-Term Memory (LSTM) algorithm and the Holt-trend model were applied to predict the numbers of confirmed cases and deaths. Real-time data were collected from the World Health Organization (WHO). Three countries were considered to test the proposed models: Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models showed better performance in predicting coronavirus cases. Standard performance measures (MSE, RMSE, mean error, and correlation) were employed to evaluate the proposed models. The correlation results of the LSTM for predicting the number of confirmed COVID-19 cases were 99.94%, 99.94%, and 99.91%, and for predicting the number of COVID-19 deaths were 99.86%, 98.876%, and 99.16%, for Saudi Arabia, Italy, and Spain, respectively. Similarly, the Holt-trend correlation results for predicting the number of confirmed cases were 99.06%, 99.96%, and 99.94%, and for predicting the number of deaths were 99.80%, 99.96%, and 99.94%, for Saudi Arabia, Italy, and Spain, respectively. The empirical results indicate the efficient performance of the presented models in predicting the numbers of confirmed cases and deaths from COVID-19 in these countries. Such findings provide better insight into the future course of COVID-19 in general, and the time series models applied here merit consideration for helping to save many lives.
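Holt's linear trend method, the classical baseline this abstract compares against LSTM, can be written in a few lines. The smoothing parameters and the case series below are illustrative only, not the study's WHO data or fitted values.

```python
# Sketch: Holt's linear trend (double exponential smoothing).
# level_t = a*y_t + (1-a)*(level_{t-1} + trend_{t-1})
# trend_t = b*(level_t - level_{t-1}) + (1-b)*trend_{t-1}
# h-step forecast = level_T + h * trend_T
def holt_forecast(series, alpha, beta, horizon):
    level, trend = series[0], series[1] - series[0]  # simple initialization
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

cases = [100, 140, 190, 260, 350]   # hypothetical cumulative case counts
forecast = holt_forecast(cases, alpha=0.8, beta=0.5, horizon=2)
```

Because the method extrapolates a linear trend, it tracks steadily growing series well but cannot capture the saturation or waves an LSTM might learn, one plausible reason for the performance gap the abstract reports.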
Santos, D.
Data from the World Health Organization (WHO) indicate a worldwide occurrence of 2 to 3 million cases of non-melanoma skin cancer annually. The American Cancer Society estimates that the incidence reaches 5.4 million in the United States alone. For fatal diseases, early detection has received great attention from the population and the media due to the premise that the earlier a cancer is identified, the greater the chances of cure. It is believed that the application of automated methods will help in early diagnosis, especially with image sets covering a variety of diagnoses. This article therefore presents a system for recognizing dermatological diseases from lesion images, a machine intervention in contrast to conventional detection based on medical personnel. Our model is designed in three phases: data collection and augmentation, model development, and finally prediction. We used various AI algorithms, such as ANNs, with image processing tools to form a better structure, leading to a higher accuracy of 89%. Contact: dheiver.santos@ictbridge.org, dheiver.santos@gmail.com
Santos, D.; Santos, E.
A brain tumor is understood by the scientific community as the growth of abnormal cells in the brain, some of which can lead to cancer. The traditional method to detect brain tumors is magnetic resonance imaging (MRI). From the MRI images, information about the uncontrolled growth of tissue in the brain is identified. In several research articles, brain tumor detection is done through the application of machine learning and deep learning algorithms. When these systems are applied to MRI images, brain tumor prediction is done very quickly, and the greater accuracy helps deliver treatment to patients. These predictions also help the radiologist make quick decisions. In the proposed work, a set of Artificial Neural Networks (ANNs) is applied to detect the presence of brain tumors, and its performance is analyzed through different metrics.
Filatov, D.; Ahmad Hassan Yar, G. N.
Brain tumors are among the most aggressive kinds of tumor and can cause low life expectancy if diagnosed at later stages. Manual identification of brain tumors is tedious and prone to errors, and misdiagnosis can lead to false treatment and thus reduce the patient's chances of survival. Magnetic resonance imaging (MRI) is the conventional method used to diagnose brain tumors and their types. This paper attempts to remove the manual step from the diagnostic process and use machine learning instead. We proposed the use of pretrained convolutional neural networks (CNNs) for the diagnosis and classification of brain tumors. Three types of tumors were classified alongside one class of non-tumor MRI images. The networks used were ResNet50, EfficientNetB1, EfficientNetB7, and EfficientNetV2B1. EfficientNet has shown promising results due to its scalable nature. EfficientNetB1 showed the best results, with training and validation accuracies of 87.67% and 89.55%, respectively.
Torky, M.
While the world is trying to get rid of the COVID-19 pandemic, a monkeypox (MPX) outbreak has recently emerged and is threatening many countries of the world. MPX is a rare disease caused by infection with the MPX virus, a member of the same family of pox viruses. The danger is that MPX causes pustules all over the body, which disfigure the affected regions and act as a source of infection on skin contact between individuals. Pustules and rashes are common symptoms of many pox viruses and other skin diseases such as measles, chickenpox, syphilis, and eczema; therefore, the medical and clinical diagnosis of monkeypox is a great challenge for doctors and specialists. In response to this need, artificial intelligence can provide aid systems based on machine and deep learning algorithms for diagnosing these types of diseases from datasets of skin images. In this paper, a deep learning approach, the DenseNet-121 model, is applied, tested, and compared with a convolutional neural network (CNN) model for diagnosing monkeypox on a skin image dataset of MPX and measles images. The most significant finding to emerge from this study is the superiority of the DenseNet-121 model over the CNN in diagnosing MPX cases, with a testing accuracy of 93%. These findings suggest a role for further deep learning algorithms in accurately diagnosing MPX cases with bigger datasets of similar pustule and rash diseases.
Medhi, K.; Jamil, M.; Hussain, I.
COVID-19 infection has created panic across the globe in recent times. Early detection of COVID-19 infection can save many lives in the prevailing situation. This virus affects the respiratory system and creates white patchy shadows in the lungs. Deep learning is one of the most effective artificial intelligence techniques for analysing chest X-ray images for efficient and reliable COVID-19 screening. In this paper, we have proposed a deep convolutional neural network method for fast and dependable identification of COVID-19 infection cases from patients' chest X-ray images. To validate the performance of the proposed system, chest X-ray images of more than 150 confirmed COVID-19 patients from the Kaggle data repository were used in the experiments. The results show that the proposed system identifies the cases with an accuracy of 93%.
Bajaj, N. S.; Pardeshi, S. S.; Patange, A. D.; Kotecha, D.; Mate, K. K.
Since its origin in December 2019, the novel coronavirus, or COVID-19, has caused massive panic in the world by infecting millions of people with a varying fatality rate. The main objective of governments worldwide is to control the extent of the outbreak until a vaccine or cure is devised. Machine learning is an efficient mechanism to train on, map, analyze, and predict from datasets. This paper utilizes regression, a supervised machine learning approach, to assess time-series datasets of the COVID-19 pandemic by performing a comparative analysis of datasets for India and two Municipal Corporations of Maharashtra, namely Mira-Bhayander and Akola. The current study is an attempt to draw attention to the dynamics and nature of the pandemic in a controlled locality such as a Municipal Corporation, which differ from the exponential nature observed nationally: for limited areas like those considered, the curve is observed to be cubic for total cases and multi-peak Gaussian for active cases. In conclusion, governments should empower district, corporation, and local authorities to adopt their own methodology and decision-making policy to contain the pandemic at the regional level, as in the case study discussed herein.
Abbas, A.; Lex, J. R.; Toor, J.; Khalil, E.; Ravi, B.; Whyne, C.
Background: Total hip and knee arthroplasties (THAs and TKAs) are some of the most common and successful surgeries. Predicting their duration of surgery (DOS) and length of stay (LOS) has massive implications for costs and resource management. The purpose of this study was to predict the DOS and LOS of THAs using machine learning models (MLMs) based on preoperative factors. Methods: The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) database was queried for elective unilateral THA procedures. Multiple MLMs were constructed to predict DOS and LOS. Models were evaluated according to mean squared error (MSE), buffer accuracy, and classification accuracy. To ensure useful predictions, the results of the models were compared to a mean regressor and previous MLM predictions for primary TKAs. Results: 196,942 patients were included. The neural network had the best MSE, buffer, and training accuracies for both DOS and LOS. For DOS testing, the neural network MSE was 0.916, with 30-minute-buffer and ≤120 min / >120 min accuracies of 75.4% and 88.5%. For LOS testing, the neural network MSE was 0.567, with 1-day-buffer and ≤2 days / >2 days accuracies of 70.3% and 80.9%. Slightly reduced performance was found for THA compared to TKA for DOS and LOS (3 to 5%), with similar important features identified. Conclusion: MLMs based on preoperative factors successfully predicted the DOS and LOS of elective unilateral THAs, with performance similar to TKA. Future work should include operational factors to apply these models to real-world resource optimization.
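The "buffer accuracy" metric this abstract reports (a prediction counts as correct if it lands within a fixed tolerance, e.g. 30 minutes for DOS or 1 day for LOS) can be sketched as follows. The durations below are hypothetical, not NSQIP data.

```python
# Sketch: buffer accuracy, the fraction of predictions within a fixed
# tolerance of the true value. Synthetic durations for illustration.
def buffer_accuracy(y_true, y_pred, buffer):
    """Fraction of predictions with |true - pred| <= buffer."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if abs(t - p) <= buffer)
    return hits / len(y_true)

true_dos = [95, 120, 150, 80, 200]    # true durations of surgery, minutes
pred_dos = [110, 100, 185, 85, 205]   # model predictions, minutes
acc30 = buffer_accuracy(true_dos, pred_dos, buffer=30)  # 4 of 5 within 30 min
```

Unlike MSE, this metric maps directly onto scheduling practice: a prediction off by less than one OR time slot is operationally "correct", which is why the study reports it alongside MSE.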
Hassanien, A. E.; Mahdy, L. N.; Ezzat, K. A.; Elmousalami, H. H.; Aboul Ella, H.
Early detection of SARS-CoV-2, the causative agent of COVID-19, is now a critical task for clinical practitioners. The spread of COVID-19 was declared a pandemic by the WHO on 11 March 2020. Consequently, it is a top priority to identify infected people so that prevention procedures can be applied to minimize the spread of COVID-19 and to begin early medical care of those infected. In this paper, a deep learning-based methodology is proposed for the detection of COVID-19-infected patients using X-ray images. A support vector machine classifies the coronavirus-affected X-ray images from others using the deep features. The technique is useful to clinical practitioners for early detection of COVID-19-infected patients. The suggested system of multi-level thresholding plus SVM presented high accuracy in classifying lungs infected with COVID-19. All images were of the same size and stored in JPEG format at 512 × 512 pixels. The average sensitivity, specificity, and accuracy of the lung classification using the proposed model were 95.76%, 99.7%, and 97.48%, respectively.